IGCF 2024: Deepfakes could cause global losses exceeding $10 trillion by 2025

Governments and academic institutions have a responsibility to implement robust measures to ensure digital safety, experts say

Caption: IGCF 2024 panelists included cybersecurity expert and startup founder Hector Monsegur, AI and digital transformation specialist Nader Al Gazal, MetaVRse co-founder Alan Smithson, and Dr. Inhyok Cha, Professor at the Gwangju Institute of Science and Technology and Deputy President for Global Cooperation, South Korea.
Source: Gulf News

Sharjah: At the International Government Communication Forum (IGCF 2024) in Sharjah, experts warned that deepfake technology—AI-generated content that manipulates audio, video, and images—could cause global losses of over $10 trillion by 2025. During a session titled “Why Resilient Governments are Building Protective Shields with Artificial Intelligence,” the panel underscored the urgent need for governments to develop tools and strategies to combat this rising threat.

Call for government intervention 

Hector Monsegur, security researcher and founder of a cybersecurity startup, highlighted the absence of real-time tools to detect deepfakes and urged governments to implement preventive measures to minimize potential damage. "At some point, everyone will face the risks of deepfakes," Monsegur said, stressing the importance of proactive strategies.

He suggested developing a multi-factor authentication tool for social media channels such as WhatsApp: while on a call, a user could tap their friend’s name and send an authentication request to verify that the person on the other end is real.
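Monsegur's suggestion amounts to a challenge–response check layered on top of a live call. A minimal sketch of that idea in Python is below, assuming both parties have pre-registered a shared secret with the platform; the function names and the HMAC scheme are illustrative assumptions, not a description of any real WhatsApp feature:

```python
import hmac
import hashlib
import secrets

def issue_challenge() -> bytes:
    """The platform generates a fresh random nonce when the user taps 'verify'."""
    return secrets.token_bytes(32)

def respond(shared_secret: bytes, challenge: bytes) -> str:
    """The callee's registered device signs the challenge with the shared secret."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()

def verify(shared_secret: bytes, challenge: bytes, response: str) -> bool:
    """The caller's app checks the response. An impostor using a deepfake voice
    but lacking the registered device's secret cannot produce a valid signature."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Usage: the real friend passes verification; an impostor with the wrong key fails.
secret = secrets.token_bytes(32)
challenge = issue_challenge()
print(verify(secret, challenge, respond(secret, challenge)))       # real friend
print(verify(secret, challenge, respond(b"wrong-key", challenge)))  # impostor
```

The point of the sketch is that authentication binds identity to a cryptographic secret rather than to a face or voice, which is exactly the channel deepfakes attack.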

Alan Smithson, co-founder of Metaverse, described deepfakes as part of the “paradox of technology”—a tool that can be used for both creative and malicious purposes. He emphasized that governments are responsible for ensuring that AI is used ethically, especially in protecting democratic processes and public trust.

Dr. Inhyok Cha, Professor at the Gwangju Institute of Science and Technology and Deputy President for Global Cooperation, pointed out that deepfake technology is becoming more accessible, enabling individuals to create convincing, yet potentially harmful, content.

“One of the biggest problems of deepfakes is the use of this technology for sexually nefarious purposes,” he said.

“In some instances, women may not have stringent laws for their rights. One fear women have of the metaverse is that people will behave badly, thinking that the technological freedom they acquire with this new tool will exempt them from behaving well. From a government point of view, every time a new technology emerges, they have the task of educating people that with great freedom comes great responsibility.”

Nader Al Gazal, academic and expert on AI and digital transformation, stressed the need to leverage AI tools to regulate deepfakes even as their prevalence grows.

The challenges

Deepfakes have the potential to severely damage public trust in media and news outlets. This loss of confidence could ripple out, threatening the legitimacy of state institutions and undermining democratic processes, especially when deepfakes are used to sway elections or shape public opinion. Left unchecked, such breaches pose serious risks, including the erosion of societal structures and stability.

Regulating AI: US, EU, and China's early steps

The conversation also addressed the regulatory steps taken by global powers like the US, EU, and China to control the impact of AI and deepfakes. Smithson pointed to California’s recent legislation banning deepfakes in elections and called for more widespread regulation. The US is working on laws like the Defending Democracy from Deepfake Deception Act to hold online platforms accountable for AI-generated content.

Education as the first line of defense

The panel concluded by emphasizing that public education remains the most powerful tool in combating the deepfake threat. As Smithson noted, “An educated public is the best defense against the misuse of AI.” Governments must therefore prioritize digital safety programs in schools and workplaces to keep pace with the evolving landscape of AI technologies.

While deepfakes were just one of the topics addressed under the broader session on AI, the discussions underscored the urgent need for governments to stay ahead of this evolving threat.